Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
The paper that proposed Chain-of-Thought (CoT) prompting.
"a chain of thought -- a series of intermediate reasoning steps --" (Abstract)
we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting.
Figure 1
Standard Prompting
Shows a single exemplar containing only the question and its answer.
Chain-of-Thought Prompting
Shows a single exemplar containing the question, the answer, and the derivation (the intermediate reasoning steps).
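As a concrete illustration of the Figure 1 contrast, here is a minimal Python sketch of the two prompt formats (the exemplar is paraphrased from Figure 1 of the paper; the prompt strings are assembled here for illustration and are not taken from any released code):

```python
# Standard prompting: the exemplar contains only the question and the final answer.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)

# Chain-of-thought prompting: the same exemplar, but the answer is preceded
# by the intermediate reasoning steps that lead to it.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)
```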
Figure 3: Examples of 〈input, chain of thought, output〉 triples
Instead of producing the answer directly, the model is prompted to output the intermediate steps (its thought process), which improves performance.
Merely eliciting the reasoning process is enough to raise accuracy.
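A minimal sketch of how such ⟨input, chain of thought, output⟩ triples can be assembled into a few-shot prompt (the function name and formatting are assumptions for illustration, not the paper's code):

```python
from typing import List, Tuple

def build_cot_prompt(exemplars: List[Tuple[str, str, str]], question: str) -> str:
    """Format each <input, chain of thought, output> triple as a Q/A pair whose
    answer contains the reasoning, then append the new question to be solved."""
    parts = [
        f"Q: {inp}\nA: {chain} The answer is {out}."
        for inp, chain, out in exemplars
    ]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

The model is then expected to continue the final "A:" with its own chain of thought followed by the answer.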
Figure 2: performance goes up by roughly 20-30%.
Figure 4
The larger the model, the more effective CoT becomes.
Math word problems had been considered difficult for language models, but CoT improved performance on them.
Coin flip task
After the coin has been flipped several times, is it now heads or tails?
Table 14
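The coin flip exemplars in Table 14 follow the same pattern; below is a rough sketch (names and wording are illustrative approximations, not copied from the table), together with the parity logic that determines the ground-truth answer:

```python
# Ground truth: the coin starts heads up, and only the parity of the number
# of flips matters.
def coin_still_heads_up(flips: list[bool]) -> bool:
    """Each True entry means that person flipped the coin once."""
    return sum(flips) % 2 == 0

# A chain-of-thought exemplar in the style of the paper's symbolic reasoning prompts.
cot_exemplar = (
    "Q: A coin is heads up. Alice flips the coin. Bob does not flip the coin. "
    "Is the coin still heads up?\n"
    "A: The coin was flipped by Alice. So the coin was flipped 1 time, which "
    "is an odd number. The coin started heads up, so after an odd number of "
    "flips, it will be tails up. So the answer is no."
)
```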